42 research outputs found

    Doctor of Philosophy

    Get PDF
    With the spread of the internet and mobile devices, transferring information safely and securely has become more important than ever. Finite fields have widespread applications in such domains, for example in cryptography and error-correcting codes, among many others. In most finite field applications, the field size - and therefore the bit-width of the operands - can be very large. The high complexity of arithmetic operations over such large fields requires circuits to be (semi-)custom designed. This raises the potential for errors/bugs in the implementation, which can be maliciously exploited to compromise the security of such systems. Formal verification of finite field arithmetic circuits has therefore become an imperative. This dissertation targets the problem of formal verification of hardware implementations of combinational arithmetic circuits over finite fields of the type F2k. Two specific problems are addressed: i) verifying the correctness of a custom-designed arithmetic circuit implementation against a given word-level polynomial specification over F2k; and ii) gate-level equivalence checking of two different arithmetic circuit implementations. This dissertation proposes polynomial abstractions over finite fields to model and represent the circuit constraints. Subsequently, decision procedures based on modern computer algebra techniques - notably, Gröbner bases-related theory and technology - are engineered to solve the verification problem efficiently. The arithmetic circuit is modeled as a polynomial system in the ring F2k[x1, x2, ..., xd], and computer algebra-based results (Hilbert's Nullstellensatz) over finite fields are exploited for verification. Using our approach, experiments are performed on a variety of custom-designed finite field arithmetic benchmark circuits. The results are also compared against contemporary methods based on SAT and SMT solvers, BDDs, and AIG-based methods.
    Our tools can verify the correctness of, and detect bugs in, circuits up to 163 bits over F2163, whereas contemporary approaches are infeasible beyond 48-bit circuits.
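    The spec-versus-circuit problem can be illustrated on a toy scale. The sketch below (a simplification, not the dissertation's Gröbner-basis procedure; all names are illustrative) compares a word-level GF(2^4) multiplier specification against a bit-serial, hardware-style implementation by exhaustive check, which is feasible only for tiny fields - precisely why symbolic methods are needed at 163 bits:

```python
def spec_mult(a, b, k=4, irred=0b10011):
    """Word-level spec: multiply a and b in GF(2^k) as a carry-less
    polynomial product reduced modulo the irreducible polynomial
    (0b10011 = x^4 + x + 1 for GF(2^4))."""
    prod = 0
    for i in range(k):
        if (b >> i) & 1:
            prod ^= a << i              # XOR-accumulate shifted copies of a
    for i in range(2 * k - 2, k - 1, -1):
        if (prod >> i) & 1:             # eliminate terms of degree >= k
            prod ^= irred << (i - k)
    return prod

def serial_mult(a, b, k=4, irred=0b10011):
    """Bit-serial, MSB-first shift-and-reduce multiplier, in the style
    of a hardware netlist (a stand-in for a gate-level implementation)."""
    r = 0
    for i in range(k - 1, -1, -1):
        r <<= 1                         # multiply accumulator by x
        if (r >> k) & 1:
            r ^= irred                  # conditional modular reduction
        if (b >> i) & 1:
            r ^= a                      # add a when bit i of b is set
    return r & ((1 << k) - 1)

def equivalent(k=4, irred=0b10011):
    """Exhaustive equivalence check over all field elements: 2^(2k)
    cases, which blows up for large k."""
    return all(spec_mult(a, b, k, irred) == serial_mult(a, b, k, irred)
               for a in range(1 << k) for b in range(1 << k))
```

    Introducing a bug, e.g. replacing `r ^= a` with `r |= a` in `serial_mult`, makes `equivalent()` return False, modeling the bug-detection use case.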

    Signal Timing Optimization to Improve Air Quality

    Get PDF
    This study develops an optimization methodology for signal timing at intersections to reduce emissions based on MOVES, the latest emission model released by the U.S. Environmental Protection Agency (EPA). The primary objective of this study is to close the gap between signal-optimization research at intersections and the state of the art in emission modeling. The methodology development includes four levels: the vehicle level, the movement level, the intersection level, and the arterial level. At the vehicle level, the emission function with respect to delay is derived for a vehicle driving through an intersection. Multiple acceleration models are evaluated, and the best one is selected in terms of emission estimation at an intersection. Piecewise functions are used to describe the relationship between emissions and intersection delay. At the movement level, emissions are modeled given the green and red times of a movement. To account for randomness, the number of vehicle arrivals during a cycle is assumed to follow a Poisson distribution. According to the numerical results, the relative difference between emission estimates with and without considering randomness is usually smaller than 5.0% at a typical intersection of two urban arterials. At the intersection level, an optimization problem is formulated to consider emissions at an intersection. The objective function is a linear combination of delay and emissions at the intersection, so that the tradeoff between the two can be examined within the optimization problem. In addition, a convex approximation is proposed for the emission calculation; accordingly, the optimization problem can be solved more efficiently using the interior point algorithm (IPA). The case study shows that the optimization problem with this convex approximation can still find appropriate optimal signal timing plans when considering traffic emissions.
    At the arterial level, emissions are minimized at multiple intersections along an arterial. First, discrete models are developed to describe the bandwidth, stops, delay, and emissions at a particular intersection. Second, based on these discrete models, an optimization problem is formulated with the intersection offsets as decision variables. The simulation results indicate that the emission-reduction benefit becomes increasingly significant as the number of intersections along the arterial increases.
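    The intersection-level formulation can be sketched in miniature. The code below grid-searches a two-phase green split minimizing a weighted sum of delay and emissions; the delay term is Webster's uniform-delay formula and the emission curve is a hypothetical piecewise-linear function of delay, so neither reproduces the study's MOVES-calibrated models nor its interior-point solver:

```python
def uniform_delay(cycle, green, flow, sat_flow=1800):
    """Average uniform delay (s/veh) for one movement: Webster's first
    term, with the degree of saturation capped to keep the term finite."""
    lam = green / cycle                      # green ratio
    x = min(flow / (lam * sat_flow), 0.95)   # degree of saturation
    return 0.5 * cycle * (1 - lam) ** 2 / (1 - lam * x)

def emissions_per_veh(delay):
    """Hypothetical piecewise-linear emission curve (g/veh) vs delay (s),
    echoing the piecewise form described in the abstract."""
    return 1.0 + 0.02 * delay if delay < 30 else 1.6 + 0.05 * (delay - 30)

def best_green_split(cycle=90, flows=(600, 400), w=0.5):
    """Grid-search green time for phase 1 (phase 2 gets the remainder,
    lost time ignored) minimizing w*delay + (1-w)*emissions per hour."""
    best_g, best_cost = None, float("inf")
    for g1 in range(10, cycle - 9):
        cost = 0.0
        for g, q in zip((g1, cycle - g1), flows):
            d = uniform_delay(cycle, g, q)
            cost += q * (w * d + (1 - w) * emissions_per_veh(d))
        if cost < best_cost:
            best_g, best_cost = g1, cost
    return best_g, best_cost
```

    As expected, the heavier movement receives the larger share of green time; the weight w lets the delay/emission tradeoff be examined directly.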

    DialoGPS: Dialogue Path Sampling in Continuous Semantic Space for Data Augmentation in Multi-Turn Conversations

    Full text link
    In open-domain dialogue generation tasks, contexts and responses in most datasets are one-to-one mapped, violating an important many-to-many characteristic: a context leads to various responses, and a response answers multiple contexts. Without such patterns, models generalize poorly and prefer safe responses. Many attempts have been made from a one-to-many perspective in multi-turn settings, or from a many-to-many perspective but limited to single-turn settings. The major challenge in augmenting multi-turn dialogues in a many-to-many fashion is that discretely replacing each turn with a semantically similar one breaks fragile context coherence. In this paper, we propose the DialoGue Path Sampling (DialoGPS) method in continuous semantic space, the first many-to-many augmentation method for multi-turn dialogues. Specifically, we map a dialogue to our extended Brownian Bridge, a special Gaussian process. We sample latent variables to form coherent dialogue paths in the continuous space. A dialogue path corresponds to a new multi-turn dialogue and is used as augmented training data. We show the effect of DialoGPS with both automatic and human evaluation. Comment: ACL 2023 main conference.
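    The bridge-sampling idea can be sketched with a plain Brownian bridge between two turn embeddings. This is only the core Gaussian-process interpolation; the paper's extended bridge, encoder, and decoder are not reproduced, and the function name, dimensions, and noise scale are illustrative:

```python
import math
import random

def brownian_bridge_path(x0, x1, n_steps, sigma=0.1, seed=0):
    """Sample latents z_t between embeddings x0 and x1 along a Brownian
    bridge: mean (1-t)*x0 + t*x1, per-coordinate std sigma*sqrt(t*(1-t)).
    The variance vanishes at t=0 and t=1, pinning both endpoints."""
    rng = random.Random(seed)
    path = []
    for s in range(n_steps + 1):
        t = s / n_steps
        std = sigma * math.sqrt(t * (1 - t))
        z = [(1 - t) * a + t * b + rng.gauss(0, std)
             for a, b in zip(x0, x1)]
        path.append(z)
    return path
```

    Intermediate latents stay anchored to both endpoints, which is what keeps a sampled dialogue path coherent with its context and response.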

    The construction of a prognostic model of cervical cancer based on four immune-related LncRNAs and an exploration of the correlations between the model and oxidative stress

    Get PDF
    Introduction: Immune-related lncRNAs (IRLs) are critical for the development of cervical cancer (CC), but it is still unclear how exactly IRLs contribute to CC. In this study, we aimed to examine the relationship between IRLs and CC in detail. Methods: First, the RNA-seq data and clinical data of CC patients were collected from The Cancer Genome Atlas (TCGA) database, along with the immune genes from the Import database. We used univariate Cox regression and the least absolute shrinkage and selection operator (LASSO) to obtain predictive IRLs after screening the variables. According to the expression levels and risk coefficients of the IRLs, risk scores were calculated. We analyzed the relationship between the model and oxidative stress. We stratified patients into high- and low-risk groups according to the risk model. We also evaluated the survival differences, immune cell differences, immunotherapeutic response differences, and drug sensitivity differences between the risk groups. Finally, the genes in the model were experimentally validated. Results: Based on the above analyses, we selected four IRLs (TFAP2A.AS1, AP000911.1, AL133215.2, and LINC02078) to construct the risk model. The model was associated with oxidative-stress-related genes, especially SOD2 and OGG1. Patients in the high-risk group had a lower overall survival than those in the low-risk group. The risk score was positively correlated with resting mast cells, neutrophils, and CD8+ T-cells. Patients in the low-risk group showed a greater sensitivity to immunosuppression therapy. In addition, we found that patients with the PIK3CA mutation were more sensitive to chemotherapeutic agents such as dasatinib, afatinib, dinaciclib, and pelitinib. The function of AL133215.2 was verified, which was consistent with previous findings, and AL133215.2 exerted a pro-tumorigenic effect.
    We also found that AL133215.2 was closely associated with oxidative-stress-related pathways. Discussion: The results suggest that the risk model may be useful for prognosis in patients with CC and may open up new routes for immunotherapy.
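    The risk-score construction described in the Methods can be sketched as a weighted sum of lncRNA expression values followed by a median split. The coefficients and expression values below are invented for illustration; the study estimates them from TCGA data via Cox/LASSO fits:

```python
# Hypothetical coefficients for the four model lncRNAs
# (illustrative values, not the fitted ones).
COEFS = {"TFAP2A.AS1": 0.8, "AP000911.1": 0.5,
         "AL133215.2": 1.2, "LINC02078": 0.3}

def risk_score(expression, coefs=COEFS):
    """Risk score = sum of coefficient * expression over model lncRNAs."""
    return sum(coefs[g] * expression[g] for g in coefs)

def stratify(patients, coefs=COEFS):
    """Split patients into high/low risk at the (upper) median score."""
    scores = {pid: risk_score(expr, coefs) for pid, expr in patients.items()}
    med = sorted(scores.values())[len(scores) // 2]
    return {pid: ("high" if s > med else "low") for pid, s in scores.items()}

# Toy cohort: uniform expression level per patient, for illustration only.
patients = {f"P{i}": {g: float(i) for g in COEFS} for i in range(1, 5)}
labels = stratify(patients)
```

    With positive coefficients, patients with uniformly higher expression of the four lncRNAs land in the high-risk group, mirroring the stratification used for the survival and drug-sensitivity comparisons.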

    Lockup Agreements during Equity Issuance

    Get PDF
    © 2017 Dr. Jinpeng Lv. Information in the equity issuance market is highly asymmetric. Issuers have information advantages over investors and underwriters. Under asymmetric information, in the U.S., insiders from issuing companies and the underwriters voluntarily negotiate lockup agreements before the issuance of equity. Lockup agreements restrict insiders from selling their shares during the lockup period. However, underwriters have the right to release some or all of the locked-up shares, allowing insiders to sell their shares early at any time before the lockup expiration. Early sales refer to these insider sales during lockup periods. Lockups commonly exist in both initial public offerings (IPOs) and seasoned equity offerings (SEOs). This thesis investigates both the underwriters' incentives for early releases of locked-up shares during the IPO and the impact of IPO lockups on the decision to include SEO lockups. First, I study underwriters' incentives for early releases during an IPO. Ten percent of IPOs with lockup agreements have early sales by top executives. Early sales reduce the likelihood that IPO companies switch lead underwriters in their subsequent SEOs. IPO companies with early sales have better post-IPO performance than their counterparts without early sales. I argue that early sales reduce the signaling cost incurred by IPO lockups under asymmetric information. As information resolves after the IPO, good companies exercise early sales and directly benefit from the reduction in the signaling cost, while underwriters benefit from an increase in future business. Second, I examine the relation between IPO and SEO lockups. I find that underwriters are more likely to impose SEO lockups on issuers that have IPO lockups. I focus on a sample of issuers that conduct their first SEOs within four years after the IPO.
I attribute the positive relation between SEO and IPO lockups partially to high correlations between company characteristics at the times of the IPO and the SEO. However, the commitment level of insiders in the issuing company does not offer an explanation for the positive relation between IPO and SEO lockups. Rather, the positive share price response to the announcement of the change from including lockups at the IPO to waiving lockups at the SEO implies that this change by underwriters conveys good news to the market, consistent with SEO lockups helping to reduce the information asymmetry in the equity issuance market

    Microstructure and Mechanical Properties of TiC0.7N0.3-HfC-WC-Ni-Mo Cermet Tool Materials

    No full text
    TiC0.7N0.3-HfC-WC-Ni-Mo cermet tool materials were fabricated by hot pressing at 1450 °C. The effects of WC (tungsten carbide) content on the microstructure and mechanical properties of TiC0.7N0.3-HfC-WC-Ni-Mo cermet tool materials were investigated. The results showed that the TiC0.7N0.3-HfC-WC-Ni-Mo cermets were mainly composed of TiC0.7N0.3, Ni, and (Ti, Hf, W, Mo)(C, N), and that three phases were present: a dark phase, a gray phase, and a light gray phase. The dark phase was undissolved TiC0.7N0.3, the gray phase was the solid solution (Ti, Hf, W, Mo)(C, N) poor in Hf, W, and Mo, and the light gray phase was the solid solution (Ti, Hf, W, Mo)(C, N) rich in Hf, W, and Mo. Increasing the WC content promoted the dissolution of HfC into solid solution, and HfC formed a solid solution more easily with WC than with TiCN. The increased solid-solution content produced a more uniform microstructure and better mechanical properties. In addition, the Vickers hardness, flexural strength, and fracture toughness of the TiC0.7N0.3-HfC-WC-Ni-Mo cermet increased with increasing WC content. At a WC content of 32 wt %, the cermet exhibited the best overall mechanical properties in this investigation. The toughening mechanisms of the TiC0.7N0.3-HfC-WC-Ni-Mo cermet tool materials included solid solution toughening, particle dispersion toughening, crack bridging, and crack deflection.

    Efficient Gröbner Basis Reductions for Formal Verification of Galois Field Arithmetic Circuits

    No full text

    Methodology and guidelines for regulating traffic flows under air quality constraints in metropolitan areas

    No full text
    This project developed a methodology to couple a new pollutant dispersion model with a traffic assignment process to contain air pollution while maximizing mobility.